13 research outputs found

    A Backward-traversal-based Approach for Symbolic Model Checking of Uniform Strategies for Constrained Reachability

    Since the introduction of Alternating-time Temporal Logic (ATL), many logics have been proposed to reason about different strategic capabilities of the agents of a system. In particular, some logics have been designed to reason about the uniform memoryless strategies of such agents. These strategies are the ones the agents can effectively play by only looking at what they observe from the current state. ATL_ir can be seen as the core logic to reason about such uniform strategies. Nevertheless, its model-checking problem is difficult (it requires a polynomial number of calls to an NP oracle), and practical algorithms to solve it appeared only recently. This paper proposes a technique for model checking uniform memoryless strategies. Existing techniques build the strategies from the states of interest, such as the initial states, through a forward traversal of the system. In contrast, the proposed approach builds the winning strategies from the target states through a backward traversal, making sure that only uniform strategies are explored. However, building the strategies from the ground up limits the applicability of the approach to constrained reachability objectives only. This paper describes the approach in detail and compares it experimentally with existing approaches implemented in a BDD-based framework. These experiments show that the technique is competitive on the cases it can handle. Comment: In Proceedings GandALF 2017, arXiv:1709.0176
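
    As a rough illustration of the backward traversal sketched above, the Python fragment below computes the winning states of a constrained reachability objective through a backward least fixpoint, in the much simpler setting of a single fully-informed agent with deterministic moves. The data structures (states, moves, targets, constraint) are invented for the example; the paper's contribution lies in the extra machinery, omitted here, that keeps the explored strategies uniform under partial observability and works symbolically on BDDs.

        # Backward least-fixpoint skeleton: a state is winning if it is a target,
        # or if it satisfies the constraint and some available move leads into
        # the current winning set.
        def backward_winning_states(states, moves, targets, constraint):
            win = set(targets)
            changed = True
            while changed:
                changed = False
                for s in states:
                    if s in win or s not in constraint:
                        continue
                    if any(succ in win for succ in moves.get(s, {}).values()):
                        win.add(s)
                        changed = True
            return win

        if __name__ == "__main__":
            states = {"s0", "s1", "s2"}
            moves = {"s0": {"go": "s1"}, "s1": {"go": "s2"}, "s2": {}}
            # all three states can reach the target s2 while staying in the constraint
            print(backward_winning_states(states, moves, {"s2"}, states))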

    Comparing approaches for model-checking strategies under imperfect information and fairness constraints

    Starting from Alternating-time Temporal Logic, many logics for reasoning about strategies in a system of agents have been proposed. Some of them consider the strategies that agents can play when they have partial information about the state of the system. ATLKirF is such a logic to reason about uniform strategies under unconditional fairness constraints. While logics of this kind have been extensively studied, practical approaches for solving their model-checking problem appeared only recently. This paper considers three approaches for model checking strategies under partial observability of the agents, applied to ATLKirF. These three approaches have been implemented in PyNuSMV, a Python library based on the state-of-the-art model checker NuSMV. Thanks to the experimental results obtained with this library and to the comparison of the relative performance of the approaches, this paper provides indications and guidelines for the use of these verification techniques, showing that different approaches are needed in different situations.
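
    To give a concrete flavour of the most direct of these approaches, the sketch below enumerates every uniform memoryless strategy (one action per observation class) and tests each one against a caller-supplied objective. The dictionaries and function names are illustrative only; the implementations compared in the paper work symbolically on BDDs through PyNuSMV rather than on explicit sets.

        from itertools import product

        def enumerate_uniform_strategies(observations):
            # observations maps an observation class to the set of actions enabled
            # in it; picking one action per class yields a uniform memoryless strategy
            obs = sorted(observations)
            for choice in product(*(sorted(observations[o]) for o in obs)):
                yield dict(zip(obs, choice))

        def exists_winning_strategy(observations, wins):
            # `wins` checks a complete strategy against the objective
            # (the hard part in practice, done symbolically in the real tools)
            return any(wins(strategy) for strategy in enumerate_uniform_strategies(observations))

        if __name__ == "__main__":
            observations = {"o1": {"a", "b"}, "o2": {"a"}}
            # toy objective: the strategy must pick "b" in observation class o1
            print(exists_winning_strategy(observations, lambda st: st["o1"] == "b"))  # True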

    Reasoning about memoryless strategies under partial observability and unconditional fairness constraints

    Alternating-time Temporal Logic is a logic to reason about strategies that agents can adopt to achieve a specified collective goal. A number of extensions of this logic exist; some of them combine strategies and partial observability, others include fairness constraints, but to the best of our knowledge no work provides a unified framework for strategies, partial observability and fairness constraints. Integrating these three concepts is important when reasoning about the capabilities of agents without full knowledge of a system, for instance when the agents can assume that the environment behaves in a fair way. We present ATLKirF, a logic combining strategies under partial observability in a system with fairness constraints on states. We introduce a model-checking algorithm for ATLKirF by extending the algorithm for a full-observability variant of the logic, and we investigate its complexity. We validate our proposal with an experimental evaluation.
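
    The unconditional fairness part of such an algorithm rests on a classical fair-states computation, shown below in explicit-state Python form: a greatest fixpoint keeping only the states from which every fairness constraint can be reached again and again. This is only the fairness ingredient, with invented data structures; ATLKirF additionally combines it with strategy quantification and knowledge operators, evaluated symbolically.

        def reach_within(candidates, succs, goal):
            # states of `candidates` that can reach `goal` in one or more steps
            # while staying inside `candidates` (a least fixpoint)
            reach = set()
            changed = True
            while changed:
                changed = False
                for s in candidates:
                    if s in reach:
                        continue
                    if any(t in candidates and (t in goal or t in reach)
                           for t in succs.get(s, ())):
                        reach.add(s)
                        changed = True
            return reach

        def fair_states(states, succs, fairness_constraints):
            # greatest fixpoint: keep the states from which every fairness
            # constraint remains reachable within the candidate set
            z = set(states)
            while True:
                new_z = set(z)
                for fc in fairness_constraints:
                    new_z &= reach_within(z, succs, z & fc)
                if new_z == z:
                    return z
                z = new_z

        if __name__ == "__main__":
            succs = {"s0": ["s1"], "s1": ["s0", "s2"], "s2": ["s2"]}
            # only s0 and s1 lie on a path visiting the constraint {s0} infinitely often
            print(fair_states(succs.keys(), succs, [{"s0"}]))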

    Reasoning about strategies under partial observability and fairness constraints

    A number of extensions exist for Alternating-time Temporal Logic; some of these mix strategies and partial observability but, to the best of our knowledge, no work provides a unified framework for strategies, partial observability and fairness constraints. In this paper we propose ATLK^F_po, a logic mixing strategies under partial observability and epistemic properties of agents in a system with fairness constraints on states, and we provide a model-checking algorithm for it.

    Rich Counter-Examples for Temporal-Epistemic Logic Model Checking

    Model checking verifies that a model of a system satisfies a given property, and otherwise produces a counter-example explaining the violation. The verified properties are formally expressed in temporal logics. Some temporal logics, such as CTL, are branching: they make it possible to express facts about the whole computation tree of the model, rather than about each single linear computation. This branching aspect is even more critical when dealing with multi-modal logics, i.e. logics expressing facts about systems with several transition relations. A prominent example is CTLK, a logic that reasons about temporal and epistemic properties of multi-agent systems. In general, model checkers produce linear counter-examples for failed properties, composed of a single computation path of the model. But some branching properties are only poorly and partially explained by a linear counter-example. This paper proposes richer counter-example structures called tree-like annotated counter-examples (TLACEs), for properties in Action-Restricted CTL (ARCTL), an extension of CTL quantifying over paths restricted in terms of the actions labeling the transitions of the model. These counter-examples have a branching structure that supports a more complete description of property violations. Elements of these counter-examples are annotated with parts of the property to give a better understanding of their structure. Visualization and browsing of these richer counter-examples become a critical issue, as the number of branches and states can grow exponentially for deeply-nested properties. This paper formally defines the structure of TLACEs, characterizes adequate counter-examples w.r.t. models and failed properties, and gives a generation algorithm for ARCTL properties. It also illustrates the approach with examples in CTLK, using a reduction of CTLK to ARCTL. The proposed approach has been implemented, first by extending the NuSMV model checker to generate and export branching counter-examples, and secondly by providing an interactive graphical interface to visualize and browse them. Comment: In Proceedings IWIGP 2012, arXiv:1202.422
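
    To make the structure described above more tangible, the following Python sketch models the rough shape of a tree-like annotated counter-example: each node carries a state and the annotations explaining it, and branches into further paths. The classes and field names are illustrative; the TLACE definition in the paper is richer (paths carry the action labels of ARCTL, for instance).

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class TlaceNode:
            state: str
            annotations: List[str] = field(default_factory=list)   # e.g. violated subformulas
            branches: List["TlacePath"] = field(default_factory=list)

        @dataclass
        class TlacePath:
            nodes: List[TlaceNode]

        def pretty(node: TlaceNode, indent: int = 0) -> str:
            # render the counter-example as an indented tree for quick inspection
            lines = ["  " * indent + f"{node.state}  {node.annotations}"]
            for path in node.branches:
                for child in path.nodes:
                    lines.append(pretty(child, indent + 1))
            return "\n".join(lines)

        if __name__ == "__main__":
            leaf = TlaceNode("s2", ["~p"])
            root = TlaceNode("s0", ["~AG p"], branches=[TlacePath([TlaceNode("s1"), leaf])])
            print(pretty(root))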

    Symbolic model checking of multi-modal logics : uniform strategies and rich explanations

    Model checking is a verification technique that performs an exhaustive search among the states of safety-critical systems to check whether a given property is satisfied. These properties are usually expressed within a logic that captures different aspects of the system, such as its evolution through time. Multi-modal logics mix several aspects of the system, such as the knowledge and strategies of its agents. They are usually branching logics, that is, they can express properties about several successors of the states of interest. Nevertheless, logics to reason about the strategies of agents with an imperfect view of a system under fairness constraints have seldom been considered, and model-checking algorithms appeared only recently. In this thesis, we first define ATLK_irF, a multi-modal logic reasoning about time, knowledge and uniform strategies of concurrent agents under unconditional fairness constraints. This logic can be used to reason about multi-agent programs under the supervision of a fair scheduler. We then describe three approaches to solve its model-checking problem. They are all based on an explicit enumeration of the strategies. The first one simply enumerates and checks all uniform strategies of the agents. The second one limits this enumeration to partial strategies, and uses early termination and caching to improve its performance in practice. The last one performs a backward exploration of the system to directly build the winning strategies. Furthermore, we present variants of these approaches based on pre-filtering losing moves. Finally, we implement and compare them with state-of-the-art symbolic solutions. The experiments show that each of the approaches outperforms the others in some situations. Second, we attack the problem of generating and manipulating rich explanations for multi-modal logics. One of the main advantages of model checking is its capability to produce a counter-example showing why the checked property is violated. But multi-modal logics have rich and complex explanations, and state-of-the-art model checkers such as NuSMV provide only partial explanations. We thus present a μ-calculus-based model-checking framework. The propositional μ-calculus is a logic integrating modal operators and least and greatest fixpoint operators. The goal of this framework is to help a designer solve her top-level model-checking problem by translating it into μ-calculus. It integrates a μ-calculus model checker with rich explanations. It also integrates a set of functionalities to help the designer translate these explanations back into the top-level language, such as formula aliases, a relational graph algebra, and choosers to guide the generation process. The framework is then applied to ATL, the standard logic to reason about the strategies of the agents of a system, to show its applicability. (FSA - Sciences de l'ingénieur) -- UCL, 201
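
    Since the second part of the thesis revolves around the propositional μ-calculus, the sketch below shows the two fixpoint computations at its core in explicit-state Python form, together with two CTL-style properties encoded as fixpoints. This is only the evaluation skeleton over invented data structures; the framework described in the thesis works symbolically on BDDs and, crucially, also produces rich explanations.

        def lfp(f):
            # least fixpoint of a monotone set transformer (mu X. f(X))
            x = set()
            while f(x) != x:
                x = f(x)
            return x

        def gfp(f, states):
            # greatest fixpoint of a monotone set transformer (nu X. f(X))
            x = set(states)
            while f(x) != x:
                x = f(x)
            return x

        if __name__ == "__main__":
            states = {"s0", "s1", "s2"}
            succs = {"s0": {"s1"}, "s1": {"s2"}, "s2": {"s2"}}
            p = {"s2"}
            ex = lambda x: {s for s in states if succs[s] & x}      # EX X
            ax = lambda x: {s for s in states if succs[s] <= x}     # AX X
            print(lfp(lambda x: p | ex(x)))           # EF p = mu X. p | EX X
            print(gfp(lambda x: p & ax(x), states))   # AG p = nu X. p & AX X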

    Verification of railway interlocking systems

    In the railway domain, an interlocking is a computerised system that controls the railway signalling objects in order to allow safe operation of the train traffic. Each interlocking makes use of particular data, called application data, that reflects the track layout of the station under control. The verification and validation of the application data are performed manually and are thus error-prone and costly. In this paper, we explain how we built an executable model in NuSMV of a railway interlocking based on the application data. We also detail the tool that we have developed in order to translate the application data into our model automatically. Finally, we show how we could verify a realistic set of safety properties on a real-size station model by customizing the existing model-checking algorithm with PyNuSMV, a Python library based on NuSMV.
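
    As a toy illustration of the translation step, the sketch below turns a drastically simplified description of signals and their conflicts into an SMV model with CTL safety properties. The input format and the generated model are invented for the example; the real application data and the resulting interlocking model are far richer, and the unconstrained next() assignments below only serve to produce syntactically valid NuSMV input.

        def to_smv(signals, conflicts):
            # emit a tiny SMV model: one boolean per signal plus one safety
            # property per pair of conflicting signals
            lines = ["MODULE main", "VAR"]
            for sig in signals:
                lines.append(f"  {sig}_green : boolean;")
            lines.append("ASSIGN")
            for sig in signals:
                lines.append(f"  init({sig}_green) := FALSE;")
                # left nondeterministic here; a real model would constrain next()
                # using the application data (routes, points, track circuits, ...)
                lines.append(f"  next({sig}_green) := {{FALSE, TRUE}};")
            for a, b in conflicts:
                # safety: two conflicting signals are never green at the same time
                lines.append(f"CTLSPEC AG !({a}_green & {b}_green)")
            return "\n".join(lines)

        if __name__ == "__main__":
            print(to_smv(["sig_A", "sig_B"], [("sig_A", "sig_B")]))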